The proportion of time an animal is in a feeding behavioral state.
Process Model
\[Y_{i,g,t+1} \sim \text{Multivariate Normal}(d_{i,g,t},\Sigma)\]
\[d_{i,g,t}= Y_{i,g,t} + \gamma_{S_{i,g,t}}*T_{i,g,t}*( Y_{i,g,t}- Y_{i,g,t-1} )\]
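To make the displacement step concrete, here is a minimal Python sketch (illustrative only, not the fitted model; the function name and the gamma and theta values are made up) of how the expected next location combines the previous displacement, the state's turning-angle rotation matrix T, and the move-persistence parameter gamma:

```python
import math

def expected_location(y_prev, y_curr, gamma, theta):
    """d_t = y_t + gamma * T(theta) @ (y_t - y_{t-1}).

    T(theta) is the 2x2 rotation matrix for the state's mean turning
    angle; gamma in [0, 1] is the state's move-persistence parameter.
    """
    dx, dy = y_curr[0] - y_prev[0], y_curr[1] - y_prev[1]
    c, s = math.cos(theta), math.sin(theta)
    # rotate the previous displacement by theta, then damp it by gamma
    return (y_curr[0] + gamma * (c * dx - s * dy),
            y_curr[1] + gamma * (s * dx + c * dy))

# traveling-like state: high persistence, mean turning angle near zero,
# so the expected location continues along the previous heading
d = expected_location((0.0, 0.0), (1.0, 0.0), gamma=0.9, theta=0.0)
print(d)  # approximately (1.9, 0.0)
```

A foraging-like state would instead use a small gamma and a turning angle away from zero, producing shorter, more tortuous expected steps.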
\[ \begin{matrix} \alpha_{i,1,1} & \beta_{i,1,1} & 1-(\alpha_{i,1,1} + \beta_{i,1,1}) \\ \alpha_{i,2,1} & \beta_{i,2,1} & 1-(\alpha_{i,2,1} + \beta_{i,2,1}) \\ \alpha_{i,3,1} & \beta_{i,3,1} & 1-(\alpha_{i,3,1} + \beta_{i,3,1}) \\ \end{matrix} \]

\[\text{logit}(\phi_{Behavior}) = \alpha_{Behavior_{t-1}}\]

The behavior at time \(t\) of individual \(i\) on track \(g\) is a discrete draw:

\[S_{i,g,t} \sim \text{Cat}(\phi_{traveling},\phi_{foraging},\phi_{resting})\]
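The switching mechanism can be sketched in a few lines of Python for the two-state case used in the JAGS code below (illustrative only; the alpha values and function name are made up, and the fitted model estimates alpha):

```python
import random

def simulate_states(alpha, n_steps, init_state=1, seed=0):
    """Simulate a behavior sequence where P(state 1 at time t) depends
    only on the state at t-1: phi[1] = alpha[prev], phi[2] = 1 - phi[1].

    States are coded 1 and 2, matching the dcat() parameterization;
    alpha[k-1] is the probability of entering state 1 from state k.
    """
    rng = random.Random(seed)
    states = [init_state]
    for _ in range(n_steps - 1):
        p_state1 = alpha[states[-1] - 1]
        states.append(1 if rng.random() < p_state1 else 2)
    return states

# illustrative "sticky" values: both states tend to persist, and state 1
# occupies roughly two-thirds of the time at stationarity
s = simulate_states(alpha=[0.9, 0.2], n_steps=1000)
print(sum(x == 1 for x in s) / len(s))  # long-run share of state 1
```

Because the transition probabilities depend only on the previous state, behavior forms a first-order Markov chain, which is what lets the sampler infer latent states from run lengths in the track.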
Dive information is modeled as a mixture based on the behavioral state (S).
\(\text{Average dive depth } (\psi)\) \[ \text{DiveDepth} \sim \text{Normal}(dive_{\mu_S},dive_{\tau_S})\]
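As a numerical illustration of this mixture (the parameter values below are made-up stand-ins for `depth_mu`/`depth_sigma`, which the model estimates), dive depths can be drawn from a state-dependent normal truncated at the surface:

```python
import random

def sample_dive_depth(state, sub_state, rng):
    """Draw one dive depth (m) from the state-dependent component.

    Illustrative parameters only, keyed by (state, sub_state):
    (2,1) foraging, (2,2) resting, (1,1) traveling.
    """
    mu = {(1, 1): 40.0, (2, 1): 150.0, (2, 2): 10.0}[(state, sub_state)]
    sigma = {(1, 1): 15.0, (2, 1): 60.0, (2, 2): 5.0}[(state, sub_state)]
    # rejection sampling implements the T(0,) truncation at the surface
    while True:
        depth = rng.gauss(mu, sigma)
        if depth >= 0:
            return depth

rng = random.Random(42)
foraging = [sample_dive_depth(2, 1, rng) for _ in range(2000)]
resting = [sample_dive_depth(2, 2, rng) for _ in range(2000)]
print(sum(foraging) / 2000, sum(resting) / 2000)  # foraging dives run much deeper
```

The separation between the component means is what allows dive depth to inform the latent behavioral state in the joint model.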
Dive Profiles with Argos timestamps
Dive autocorrelation plots
## # A tibble: 11 x 4
## Animal n argos dive
## <int> <int> <int> <int>
## 1 131111 548 173 375
## 2 131115 1029 179 850
## 3 131116 1932 457 1475
## 4 131127 9275 2495 6780
## 5 131128 112 52 60
## 6 131130 1165 151 1014
## 7 131132 2292 589 1703
## 8 131133 8496 2077 6419
## 9 131134 2098 653 1445
## 10 131136 6844 1970 4874
## 11 154187 1866 486 1380
## [1] 35657
## # A tibble: 11 x 2
## Animal n
## <int> <int>
## 1 131111 454
## 2 131115 1005
## 3 131116 1892
## 4 131127 8224
## 5 131128 99
## 6 131130 483
## 7 131132 1545
## 8 131133 6873
## 9 131134 1870
## 10 131136 6092
## 11 154187 1836
## # A tibble: 11 x 2
## Animal Tracks
## <int> <int>
## 1 131111 1
## 2 131115 2
## 3 131116 1
## 4 131127 29
## 5 131128 1
## 6 131130 2
## 7 131132 5
## 8 131133 34
## 9 131134 6
## 10 131136 15
## 11 154187 2
sink("Bayesian/NestedDive.jags")
cat("
model{
pi <- 3.141592653589
#for each of the 6 argos classes, observation error
for(x in 1:6){
##argos observation error##
argos_prec[x,1:2,1:2] <- argos_cov[x,,]
#Constructing the (diagonal) precision matrix; values are precisions, not SDs
argos_cov[x,1,1] <- argos_sigma[x]
argos_cov[x,1,2] <- 0
argos_cov[x,2,1] <- 0
argos_cov[x,2,2] <- argos_alpha[x]
}
for(i in 1:ind){
for(g in 1:tracks[i]){
## Priors for first true location
#for lat long
y[i,g,1,1:2] ~ dmnorm(argos[i,g,1,1,1:2],argos_prec[1,1:2,1:2])
#First movement - random walk.
y[i,g,2,1:2] ~ dmnorm(y[i,g,1,1:2],iSigma)
###First Behavioral State###
state[i,g,1] ~ dcat(lambda[]) ## assign state for first obs
#Process Model for movement
for(t in 2:(steps[i,g]-1)){
#Behavioral State at time T
phi[i,g,t,1] <- alpha[state[i,g,t-1]]
phi[i,g,t,2] <- 1-phi[i,g,t,1]
state[i,g,t] ~ dcat(phi[i,g,t,])
#Turning covariate
#Transition Matrix for turning angles
T[i,g,t,1,1] <- cos(theta[state[i,g,t]])
T[i,g,t,1,2] <- (-sin(theta[state[i,g,t]]))
T[i,g,t,2,1] <- sin(theta[state[i,g,t]])
T[i,g,t,2,2] <- cos(theta[state[i,g,t]])
#Correlation in movement change
d[i,g,t,1:2] <- y[i,g,t,] + gamma[state[i,g,t]] * T[i,g,t,,] %*% (y[i,g,t,1:2] - y[i,g,t-1,1:2])
#Gaussian Displacement in location
y[i,g,t+1,1:2] ~ dmnorm(d[i,g,t,1:2],iSigma)
}
#Final behavior state
phi[i,g,steps[i,g],1] <- alpha[state[i,g,steps[i,g]-1]]
phi[i,g,steps[i,g],2] <- 1-phi[i,g,steps[i,g],1]
state[i,g,steps[i,g]] ~ dcat(phi[i,g,steps[i,g],])
## Measurement equation - irregular observations
# loops over regular time intervals (t)
for(t in 2:steps[i,g]){
#first substate
sub_state[i,g,t,1] ~ dcat(sub_lambda[state[i,g,t]])
# loops over observed dive within interval t
for(u in 2:idx[i,g,t]){
#Substate, resting or foraging dives?
sub_phi[i,g,t,u,1] <- sub_alpha[state[i,g,t],sub_state[i,g,t,u-1]]
sub_phi[i,g,t,u,2] <- 1-sub_phi[i,g,t,u,1]
sub_state[i,g,t,u] ~ dcat(sub_phi[i,g,t,u,])
}
# loops over observed locations within interval t
for(u in 1:idx[i,g,t]){
zhat[i,g,t,u,1:2] <- (1-j[i,g,t,u]) * y[i,g,t-1,1:2] + j[i,g,t,u] * y[i,g,t,1:2]
#for each lat and long
#argos error
argos[i,g,t,u,1:2] ~ dmnorm(zhat[i,g,t,u,1:2],argos_prec[argos_class[i,g,t,u],1:2,1:2])
#for each dive depth
#dive depth at time t
divedepth[i,g,t,u] ~ dnorm(depth_mu[state[i,g,t],sub_state[i,g,t,u]],depth_tau[state[i,g,t],sub_state[i,g,t,u]])T(0,)
#Assess Model Fit
#Fit dive discrepancy statistics - comment out for memory savings
#eval[i,g,t,u] ~ dnorm(depth_mu[state[i,g,t],sub_state[i,g,t,u]],depth_tau[state[i,g,t],sub_state[i,g,t,u]])T(0,)
#E[i,g,t,u]<-pow((divedepth[i,g,t,u]-eval[i,g,t,u]),2)/(eval[i,g,t,u])
#dive_new[i,g,t,u] ~ dnorm(depth_mu[state[i,g,t],sub_state[i,g,t,u]],depth_tau[state[i,g,t],sub_state[i,g,t,u]])T(0,)
#Enew[i,g,t,u]<-pow((dive_new[i,g,t,u]-eval[i,g,t,u]),2)/(eval[i,g,t,u])
}
}
}
}
###Priors###
#Process Variance
iSigma ~ dwish(R,2)
Sigma <- inverse(iSigma)
##Mean Angle
tmp[1] ~ dbeta(10, 10)
tmp[2] ~ dbeta(10, 10)
# prior for theta in 'traveling state'
theta[1] <- (2 * tmp[1] - 1) * pi
# prior for theta in 'foraging state'
theta[2] <- (tmp[2] * pi * 2)
##Move persistance
# prior for gamma (autocorrelation parameter)
#from jonsen 2016
##Behavioral States
#Movement autocorrelation
gamma[1] ~ dbeta(10,2)
gamma[2] ~ dbeta(2,10)
#Transition Intercepts
alpha[1] ~ dbeta(1,1)
alpha[2] ~ dbeta(1,1)
#Probability of initial behavioral state
lambda[1] ~ dbeta(1,1)
lambda[2] <- 1 - lambda[1]
#Probability of initial sub-behavioral state
sub_lambda[1] ~ dbeta(1,1)
sub_lambda[2] <- 1 - sub_lambda[1]
#Dive Priors
#Foraging dives
depth_mu[2,1] ~ dunif(50,250)
depth_sigma[1] ~ dunif(0,90)
depth_tau[2,1] <- 1/pow(depth_sigma[1],2)
#Resting Dives
depth_mu[2,2] ~ dunif(0,30)
depth_sigma[2] ~ dunif(0,20)
depth_tau[2,2] <- 1/pow(depth_sigma[2],2)
#Traveling Dives
depth_mu[1,1] ~ dunif(0,100)
depth_sigma[3] ~ dunif(0,20)
depth_tau[1,1] <- 1/pow(depth_sigma[3],2)
#Dummy traveling substate
depth_mu[1,2]<-0
depth_tau[1,2]<-0.01
#Sub states
#Traveling has no substate
sub_alpha[1,1]<-1
sub_alpha[1,2]<-0
#ARS has two substates, foraging and resting
#Foraging probability
sub_alpha[2,1] ~ dbeta(1,1)
sub_alpha[2,2] ~ dbeta(1,1)
##Argos priors##
#longitudinal argos precision, from Jonsen 2005, 2016, represented as precision not sd
#by argos class
argos_sigma[1] <- 11.9016
argos_sigma[2] <- 10.2775
argos_sigma[3] <- 1.228984
argos_sigma[4] <- 2.162593
argos_sigma[5] <- 3.885832
argos_sigma[6] <- 0.0565539
#latitudinal argos precision, from Jonsen 2005, 2016
argos_alpha[1] <- 67.12537
argos_alpha[2] <- 14.73474
argos_alpha[3] <- 4.718973
argos_alpha[4] <- 0.3872023
argos_alpha[5] <- 3.836444
argos_alpha[6] <- 0.1081156
}"
,fill=TRUE)
sink()
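The measurement equation in the model above handles irregular observation times by placing each Argos fix on the line between the two bracketing regular-interval true positions (the `zhat` line). A minimal sketch of that interpolation, with a hypothetical function name:

```python
def interpolate_position(y_prev, y_curr, j):
    """Expected location of an irregularly timed Argos fix: a linear
    interpolation between the bracketing regular-interval true positions,
    where j in [0, 1] is the fraction of the interval already elapsed
    (the zhat line of the measurement equation)."""
    return tuple((1 - j) * a + j * b for a, b in zip(y_prev, y_curr))

# a fix observed halfway through the interval sits at the midpoint
print(interpolate_position((0.0, 0.0), (4.0, 2.0), 0.5))  # (2.0, 1.0)
```

The observed fix is then drawn around this interpolated position with the class-specific Argos error, so poor-quality fixes pull the latent track less strongly than good ones.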
##
## Processing function input.......
##
## Done.
##
## Compiling model graph
## Resolving undeclared variables
## Allocating nodes
## Graph information:
## Observed stochastic nodes: 29169
## Unobserved stochastic nodes: 61143
## Total graph size: 1632161
##
## Initializing model
##
## Adaptive phase.....
## Adaptive phase complete
##
##
## Burn-in phase, 9000 iterations x 2 chains
##
##
## Sampling from joint posterior, 1000 iterations x 2 chains
##
##
## Calculating statistics.......
##
## Done.
## user system elapsed
## 33256.967 613.608 35330.266
## [1] "model complete"
## # A tibble: 3 x 3
## # Groups: state [?]
## state sub_state n
## <dbl> <dbl> <int>
## 1 1 1 236
## 2 2 1 631
## 3 2 2 112
Grouped by stage
Lines connect individuals
The goodness of fit is measured as a chi-squared discrepancy: the expected value under the model is compared to the observed value of the actual data, and a replicate dataset generated from the posterior predictive distribution is scored the same way. Better-fitting models have lower discrepancy values and fall closer to the 1:1 line. A perfect model would have 0 discrepancy, but this is unrealistic given the stochasticity in the sampling processes; it is better to focus on relative discrepancy. In addition, a model with 0 discrepancy would likely be seriously overfit and have little to no predictive power.
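The discrepancy calculation can be sketched numerically. The snippet below is a simplified illustration: it scores against a fixed predicted mean, whereas the commented-out `E`/`Enew` lines in the JAGS model compare against posterior-predictive draws (`eval`), one per MCMC iteration:

```python
import random

def chisq_discrepancy(values, expected):
    """Chi-squared discrepancy as in the commented-out fit checks:
    sum over observations of (value - expected)^2 / expected."""
    return sum((v - e) ** 2 / e for v, e in zip(values, expected))

rng = random.Random(1)
expected = [100.0] * 500                            # stand-in predicted depths
observed = [rng.gauss(100, 20) for _ in expected]   # stand-in for the data
replicate = [rng.gauss(100, 20) for _ in expected]  # posterior predictive replicate

d_obs = chisq_discrepancy(observed, expected)
d_rep = chisq_discrepancy(replicate, expected)
# Plotting d_rep against d_obs across posterior draws gives the 1:1
# comparison described above; similar values indicate adequate fit.
print(d_obs, d_rep)
```

When the model is well calibrated, the observed and replicated discrepancies are exchangeable, so points scatter evenly around the 1:1 line rather than falling systematically to one side.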